Overall survival (OS) time is one of the most important evaluation indices for gliomas. Multi-modal magnetic resonance imaging (MRI) scans play an important role in the study of glioma prognosis and OS time. Several learning-based approaches have been proposed for OS time prediction from multi-modal MRI. However, these methods usually fuse the multi-modal information either at the beginning or at the end of the deep learning network, and lack fusion of features from different scales. In addition, the fusion at the end of the network is always global-with-global (e.g., a fully connected layer after concatenating the global average pooling outputs) or local-with-local (e.g., bilinear pooling), which loses the local-with-global information. In this paper, we propose a novel method for multi-modal OS time prediction of brain tumor patients, which contains an improved non-local feature fusion module introduced at different scales. Our method achieves a relative 8.76% improvement over the current state-of-the-art method (0.6989 vs. 0.6426 accuracy). Extensive testing demonstrates that our method can adapt to situations with missing modalities. The code is available at https://github.com/tangwen920812/mmmna-net.
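To make the non-local fusion idea concrete, below is a minimal PyTorch sketch of attention-based fusion between the feature maps of two MRI modalities at a single scale. The module name, channel sizes, and residual design are our illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of non-local fusion between two modality feature maps.
# Queries come from modality A; keys/values from modality B, so each voxel
# of A attends to all voxels of B (local-with-global interaction).
import torch
import torch.nn as nn

class NonLocalFusion(nn.Module):
    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        inter = channels // reduction
        self.query = nn.Conv3d(channels, inter, kernel_size=1)  # from modality A
        self.key = nn.Conv3d(channels, inter, kernel_size=1)    # from modality B
        self.value = nn.Conv3d(channels, inter, kernel_size=1)  # from modality B
        self.out = nn.Conv3d(inter, channels, kernel_size=1)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        b, c, d, h, w = feat_a.shape
        q = self.query(feat_a).flatten(2).transpose(1, 2)          # (B, DHW, C')
        k = self.key(feat_b).flatten(2)                            # (B, C', DHW)
        v = self.value(feat_b).flatten(2).transpose(1, 2)          # (B, DHW, C')
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)  # (B, DHW, DHW)
        fused = (attn @ v).transpose(1, 2).reshape(b, -1, d, h, w)
        return feat_a + self.out(fused)  # residual keeps modality A's own signal

# Usage: one fusion module per encoder scale.
fa, fb = torch.randn(1, 32, 8, 16, 16), torch.randn(1, 32, 8, 16, 16)
print(NonLocalFusion(32)(fa, fb).shape)  # torch.Size([1, 32, 8, 16, 16])
```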
In clinical practice, anisotropic volumetric medical images with low through-plane resolution are commonly used, owing to their short acquisition time and low storage cost. However, the coarse resolution may cause difficulties in medical diagnosis for physicians as well as for computer-aided diagnosis algorithms. Deep learning-based volumetric super-resolution (SR) methods, whose core is the convolutional neural network (CNN), are a feasible way to improve resolution. Despite recent progress, these methods are limited by the inherent properties of convolution operators, which ignore content relevance and cannot effectively model long-range dependencies. In addition, most existing methods use pseudo-paired volumes for training and evaluation, where the pseudo low-resolution (LR) volumes are generated by simple degradation of their high-resolution (HR) counterparts. However, the domain gap between pseudo and real LR volumes leads to poor performance of these methods in practice. In this paper, we build the first public real-paired dataset, RPLHR-CT, as a benchmark for volumetric SR, and provide baseline results by re-implementing four state-of-the-art CNN-based methods. Considering the inherent shortcomings of CNNs, we also propose an attention-based transformer volumetric super-resolution network (TVSRN) that dispenses with convolutions entirely. This is the first study to use a pure transformer for CT volumetric SR. Experimental results show that TVSRN significantly outperforms all baselines on both PSNR and SSIM. Moreover, TVSRN achieves a better trade-off among image quality, number of parameters, and running time. Data and code are available at https://github.com/smilenaxx/rplhr-ct.
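As a rough intuition for a convolution-free SR design (a toy sketch under our own assumptions, not TVSRN itself), slices can be cut into patch tokens, self-attention relates them, and a linear head predicts additional through-plane slices. Names, shapes, and the absence of positional encoding are simplifications.

```python
# Toy pure-transformer through-plane SR: each in-plane patch of each thick
# slice becomes one token; the head predicts `scale` thin slices per token.
# Positional encoding is omitted for brevity.
import torch
import torch.nn as nn

class ToyVolumeSRTransformer(nn.Module):
    def __init__(self, patch: int = 8, dim: int = 128, scale: int = 2):
        super().__init__()
        self.patch, self.scale = patch, scale
        self.embed = nn.Linear(patch * patch, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, patch * patch * scale)

    def forward(self, vol: torch.Tensor) -> torch.Tensor:
        # vol: (B, D, H, W) with low through-plane resolution D
        b, d, h, w = vol.shape
        p, s = self.patch, self.scale
        tokens = (vol.unfold(2, p, p).unfold(3, p, p)   # (B, D, H/p, W/p, p, p)
                     .reshape(b, -1, p * p))
        x = self.encoder(self.embed(tokens))
        out = self.head(x).reshape(b, d, h // p, w // p, p, p, s)
        out = out.permute(0, 1, 6, 2, 4, 3, 5).reshape(b, d * s, h, w)
        return out                                      # (B, D*scale, H, W)

print(ToyVolumeSRTransformer()(torch.randn(2, 4, 32, 32)).shape)  # (2, 8, 32, 32)
```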
Evaluating lesion progression and treatment response via longitudinal lesion tracking plays a critical role in clinical practice. Automated approaches for this task are motivated by the prohibitive labor cost and time consumption of manual lesion matching. Previous approaches typically lack the integration of local and global information. In this work, we propose a transformer-based approach, termed Transformer Lesion Tracker (TLT). Specifically, we design a Cross Attention-based Transformer (CAT) to capture and combine both global and local information to enhance feature extraction. We also develop a Registration-based Anatomical Attention Module (RAAM) to introduce anatomical information into CAT so that it can focus on useful feature knowledge. A Sparse Selection Strategy (SSS) is presented for selecting features and reducing the memory footprint of transformer training. In addition, we use a global regression to further improve model performance. We conduct experiments on a public dataset to show the superiority of our method, and find that our model improves the average Euclidean center error by at least 14.3% (6mm vs. 7mm) over the state-of-the-art (SOTA). Code is available at https://github.com/tangwen920812/tlt.
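A hedged sketch of the cross-attention idea: baseline-lesion tokens query follow-up-scan tokens, and a global regression head predicts the lesion center. The names, token shapes, and regression target below are illustrative assumptions, not the released TLT code.

```python
# Baseline-lesion tokens (queries) attend over follow-up-scan tokens
# (keys/values); pooled output regresses a 3D center offset.
import torch
import torch.nn as nn

class CrossAttentionTracker(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.center_head = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 3))

    def forward(self, lesion_tokens, scan_tokens):
        # lesion_tokens: (B, Nq, dim) from the baseline lesion crop
        # scan_tokens:   (B, Nk, dim) from the follow-up scan
        fused, _ = self.cross_attn(lesion_tokens, scan_tokens, scan_tokens)
        return self.center_head(fused.mean(dim=1))  # (B, 3) predicted (z, y, x)

tracker = CrossAttentionTracker()
print(tracker(torch.randn(1, 16, 64), torch.randn(1, 256, 64)).shape)  # (1, 3)
```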
We study whether and how we can model a joint distribution $p(x,z)$ using two conditional models $p(x|z)$ and $q(z|x)$ that form a cycle. This is motivated by the observation that deep generative models, in addition to the likelihood model $p(x|z)$, often also use an inference model $q(z|x)$ for representation extraction, yet they typically rely on an uninformative prior distribution $p(z)$ to define the joint distribution, which may cause problems such as posterior collapse and manifold mismatch. To explore the possibility of modeling the joint distribution using only $p(x|z)$ and $q(z|x)$, we study their compatibility and determinacy, corresponding to the existence and uniqueness of a joint distribution whose conditional distributions coincide with them. We develop a general theory of operable equivalence criteria for compatibility, together with sufficient conditions for determinacy. Based on this theory, we propose CyGen, a novel generative modeling framework that uses only the two cyclic conditional models. We develop methods to achieve compatibility and determinacy, and to fit and generate data using the conditional models. Freed from the prior constraint, CyGen fits data better and captures more representative features, as supported by both synthetic and real-world experiments.
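For readability, the compatibility and determinacy notions described above can be written as follows (our paraphrase of the abstract's definitions, not the paper's notation):

```latex
% Compatibility: at least one joint distribution realizes both conditionals.
% Determinacy: such a joint distribution is unique.
\text{compatible:}\quad \exists\, \pi(x,z)\ \text{s.t.}\
  \pi(x \mid z) = p(x \mid z),\quad \pi(z \mid x) = q(z \mid x);
\qquad
\text{determinate:}\quad \text{such a } \pi \text{ is unique.}
```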
Conventional supervised learning methods, especially deep ones, are found to be sensitive to out-of-distribution (OOD) examples, largely because the learned representation mixes the semantic factor with the variation factor due to their domain-specific correlation, while only the semantic factor causes the output. To address the problem, we propose a Causal Semantic Generative model (CSG) based on causal reasoning, so that the two factors are modeled separately, and develop methods for OOD prediction from a single training domain, which is common and challenging. The methods are based on the causal invariance principle, with a novel design in variational Bayes for both efficient learning and easy prediction. Theoretically, we prove that under certain conditions, CSG can identify the semantic factor by fitting training data, and this semantic identification guarantees the boundedness of the OOD generalization error and the success of adaptation. Empirical studies show improved performance over prevailing baselines.
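Schematically, the causal structure described above can be written as follows (our notation and factorization, not the paper's exact formulation):

```latex
% s: semantic factor (causes the label y); v: variation factor
% (correlated with s within a domain, but not a cause of y).
p(x, y, s, v) \;=\; p(s, v)\, p(x \mid s, v)\, p(y \mid s),
\qquad \text{with } s \not\perp v \text{ in a given domain, yet only } s \to y.
```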
As one of the most important psychic stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs (MER) automatically is becoming increasingly crucial in the field of affective computing, and provides essential technical support in lie detection, psychological analysis and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Although several spontaneous ME datasets have recently been released to alleviate this problem, the amount of available data remains small. To address ME data hunger, we construct DFME (Dynamic Facial Micro-expressions), currently the largest dynamic spontaneous ME dataset, which includes 7,526 well-labeled ME videos elicited from 671 participants and annotated by more than 20 annotators over three years. Afterwards, we adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments and objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate research on automatic MER, and provide a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
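As one common, generic remedy for the class imbalance mentioned above (our illustration, not necessarily the solution adopted in the paper), inverse-frequency class weighting of the loss looks like this:

```python
# Inverse-frequency class weights in cross-entropy: rarer ME classes
# contribute more to the loss. Labels below are a toy example.
import torch
import torch.nn as nn

labels = torch.tensor([0, 0, 0, 0, 1, 1, 2])     # toy imbalanced ME labels
counts = torch.bincount(labels).float()
weights = counts.sum() / (len(counts) * counts)  # inverse-frequency weighting
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(len(labels), len(counts))
print(criterion(logits, labels))                 # weighted scalar loss
```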
Reading comprehension of legal text can be a particularly challenging task due to the length and complexity of legal clauses and a shortage of expert-annotated datasets. To address this challenge, we introduce the Merger Agreement Understanding Dataset (MAUD), an expert-annotated reading comprehension dataset based on the American Bar Association's 2021 Public Target Deal Points Study, with over 39,000 examples and over 47,000 total annotations. Our fine-tuned Transformer baselines show promising results, with models performing well above random on most questions. However, on a large subset of questions, there is still room for significant improvement. As the only expert-annotated merger agreement dataset, MAUD is valuable as a benchmark for both the legal profession and the NLP community.
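A hedged sketch of what a fine-tuned Transformer baseline over (question, clause) pairs could look like. The model choice, label count, and example texts are placeholders, not MAUD's actual schema or the paper's baseline code; the training loop is omitted.

```python
# Encode a (question, clause) pair and classify among answer choices.
# `num_labels=4` and the texts are hypothetical placeholders.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4)

question = "What triggers the Company Termination Fee?"   # hypothetical
clause = "The Company shall pay Parent a fee of ..."      # hypothetical
inputs = tokenizer(question, clause, truncation=True, return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1)
print(pred)  # predicted answer-choice index (untrained head: random)
```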
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of these datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) into segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationship between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of the CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting previously learned classes.
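To illustrate how CLIP text embeddings can condition a segmentation head, here is a simplified sketch under our own assumptions (prompt wording, projection size, and backbone stand-in are ours, not the released Universal Model code): each class name becomes a text embedding, and its dot product with voxel features yields a per-class mask, so adding a class reduces to adding a prompt.

```python
# Class prompts -> CLIP text embeddings -> per-class voxel logits.
import torch
import torch.nn as nn
from transformers import CLIPModel, CLIPTokenizer

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

classes = ["liver", "kidney", "liver tumor"]          # illustrative subset
prompts = [f"a computerized tomography of a {c}" for c in classes]
with torch.no_grad():
    text_emb = clip.get_text_features(
        **tok(prompts, padding=True, return_tensors="pt"))  # (K, 512)

proj = nn.Linear(512, 32)                  # map text embedding to voxel space
voxel_feat = torch.randn(1, 32, 8, 16, 16) # stand-in for a 3D backbone output
logits = torch.einsum("kc,bcdhw->bkdhw", proj(text_emb), voxel_feat)
masks = torch.sigmoid(logits)              # one binary mask per class prompt
print(masks.shape)                         # torch.Size([1, 3, 8, 16, 16])
```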
In recent years, the Transformer architecture has shown its superiority in the video-based person re-identification task. Inspired by video representation learning, these methods mainly focus on designing modules to extract informative spatial and temporal features. However, they are still limited in extracting local attributes and global identity information, which are critical for the person re-identification task. In this paper, we propose a novel Multi-Stage Spatial-Temporal Aggregation Transformer (MSTAT) with two newly designed proxy embedding modules to address the above issue. Specifically, MSTAT consists of three stages that encode the attribute-associated, the identity-associated, and the attribute-identity-associated information from the video clips, respectively, achieving a holistic perception of the input person. We combine the outputs of all the stages for the final identification. In practice, to save computational cost, Spatial-Temporal Aggregation (STA) modules are first adopted in each stage to conduct self-attention operations along the spatial and temporal dimensions separately. We further introduce the Attribute-Aware and Identity-Aware Proxy embedding modules (AAP and IAP) to extract informative and discriminative feature representations at different stages. All of them are realized by employing newly designed self-attention operations with specific meanings. Moreover, temporal patch shuffling is introduced to further improve the robustness of the model. Extensive experimental results demonstrate the effectiveness of the proposed modules in extracting informative and discriminative information from videos, and illustrate that MSTAT can achieve state-of-the-art accuracy on various standard benchmarks.
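An illustrative take on temporal patch shuffling, under our assumption that features of a given spatial patch position are permuted across frames so the model must rely on appearance attributes rather than frame order; the paper's exact scheme may differ.

```python
# Randomly permute each spatial patch's features across the T frames
# with probability p, as a temporal-robustness augmentation.
import torch

def temporal_patch_shuffle(tokens: torch.Tensor, p: float = 0.5) -> torch.Tensor:
    # tokens: (B, T, N, C) -- T frames, N spatial patches per frame
    b, t, n, c = tokens.shape
    out = tokens.clone()
    for i in range(n):                        # decide per patch position
        if torch.rand(1).item() < p:
            perm = torch.randperm(t)
            out[:, :, i] = tokens[:, perm, i]
    return out

x = torch.randn(2, 8, 49, 768)
print(temporal_patch_shuffle(x).shape)        # torch.Size([2, 8, 49, 768])
```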
Neural models with an encoder-decoder framework provide a feasible solution to Question Generation (QG). However, after analyzing the model vocabulary we find that current models (both RNN-based and pre-training based) have more than 23% inflected forms. As a result, the encoder generates separate embeddings for the inflected forms, leading to a waste of training data and parameters. Even worse, in decoding these models are vulnerable to irrelevant noise and suffer from high computational costs. In this paper, we propose an approach to enhance the performance of QG by fusing word transformation. First, we identify the inflected forms of words in the encoder input and replace them with their root words, letting the encoder pay more attention to the repeated root words. Second, we recast QG as a combination of the following actions in the encoder-decoder framework: generating a question word, copying a word from the source sequence, or generating a word transformation type. Such an extension greatly decreases the size of the decoder's prediction vocabulary as well as the noise. We apply our approach to a typical RNN-based model and to UniLM to obtain the improved versions. We conduct extensive experiments on the SQuAD and MS MARCO datasets. The experimental results show that the improved versions significantly outperform the corresponding baselines in terms of BLEU, ROUGE-L and METEOR as well as time cost.
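The first step, replacing inflected forms with root words, can be approximated with off-the-shelf lemmatization. A minimal sketch using NLTK follows; the paper's own transformation inventory and tooling may differ.

```python
# Reduce inflected forms in the encoder input to root words via
# WordNet lemmatization (here with verb POS as an example).
import nltk
from nltk.stem import WordNetLemmatizer

nltk.download("wordnet", quiet=True)    # one-time resource downloads
nltk.download("omw-1.4", quiet=True)
lemmatizer = WordNetLemmatizer()

tokens = "the players were running faster".split()
roots = [lemmatizer.lemmatize(w, pos="v") for w in tokens]
print(roots)  # e.g. 'were' -> 'be', 'running' -> 'run'
```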